A Bio-Inspired Model for Visual Collision Avoidance on a Hexapod Walking Robot
Meyer HG, Bertrand O, Paskarbeit J, Lindemann JP, Schneider A, Egelhaaf M. A Bio-Inspired Model for Visual Collision Avoidance on a Hexapod Walking Robot. In: Lepora NF, Mura A, Mangan M, Verschure PFMJ, Desmulliez M, Prescott TJ, eds. Biomimetic and Biohybrid Systems: 5th International Conference, Living Machines 2016, Edinburgh, UK, July 19-22, 2016. Proceedings. Cham: Springer International Publishing; 2016: 167-178.
While navigating their environments, it is essential for autonomous mobile robots to actively avoid collisions with obstacles. Flying insects perform this behavioural task with ease, relying mainly on information provided by the visual system. Here we implement a bio-inspired collision avoidance algorithm, based on the extraction of nearness information from visual motion, on the hexapod walking robot platform HECTOR. The algorithm allows HECTOR to navigate cluttered environments while actively avoiding obstacles.
Honeybees Learn Landscape Features during Exploratory Orientation Flights
Degen J, Kirbach A, Reiter L, et al. Honeybees Learn Landscape Features during Exploratory Orientation Flights. Current Biology. 2016;26(20):2800-2804.
Pattern-Dependent Response Modulations in Motion-Sensitive Visual Interneurons—A Model Study
Even if a stimulus pattern moves at a constant velocity across the receptive field of motion-sensitive neurons, such as the lobula plate tangential cells (LPTCs) of flies, the response amplitude modulates over time. The amplitude of these response modulations is related to local pattern properties of the moving retinal image. On the one hand, pattern-dependent response modulations have previously been interpreted as 'pattern noise', because they deteriorate the neuron's ability to provide unambiguous velocity information. On the other hand, these modulations might also provide the system with valuable information about the textural properties of the environment. We analyzed the influence of the size and shape of receptive fields by simulating four versions of LPTC models consisting of arrays of elementary motion detectors of the correlation type (EMDs). These models have previously been suggested to account for many aspects of LPTC response properties. Pattern-dependent response modulations decrease with an increasing number of EMDs included in the receptive field of the LPTC models, since spatial changes within the visual field are smoothed out by the summation of spatially displaced EMD responses. This effect depends on the shape of the receptive field: for a given total size, it is the more pronounced the more elongated the receptive field is along the direction of motion. Large elongated receptive fields improve the quality of velocity signals. However, if motion signals need to be localized, the velocity coding is poor, but the signal provides potentially useful local pattern information. These modelling results suggest that motion vision by correlation-type movement detectors is subject to an uncertainty: one cannot obtain both an unambiguous and a localized velocity signal from the output of a single cell. Hence, the size and shape of the receptive fields of motion-sensitive neurons should be matched to their computational task.
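The pooling effect described above can be illustrated with a minimal correlation-type EMD array. This is a hedged sketch: the array size, filter time constant, and test grating are illustrative assumptions, not the models used in the study.

```python
import numpy as np

def emd_array(luminance, tau=5.0, dt=1.0):
    """Array of correlation-type (Hassenstein-Reichardt) motion detectors.
    luminance: 2-D array (time, photoreceptor position). Each half-detector
    correlates the low-pass-filtered signal of one photoreceptor with the
    undelayed signal of its neighbour; subtracting the mirror-symmetric
    half-detector yields a direction-selective output."""
    a = np.exp(-dt / tau)                      # first-order low-pass coefficient
    lp = np.zeros_like(luminance, dtype=float)
    for t in range(1, luminance.shape[0]):
        lp[t] = a * lp[t - 1] + (1.0 - a) * luminance[t]
    right = lp[:, :-1] * luminance[:, 1:]      # preferred-direction half
    left = luminance[:, :-1] * lp[:, 1:]       # anti-preferred half
    return right - left                        # (time, n_photoreceptors - 1)

# grating moving at constant velocity with a spatial contrast modulation
t, x = np.meshgrid(np.arange(400), np.arange(64), indexing="ij")
stim = 1.0 + 0.5 * (1.0 + 0.5 * np.sin(0.1 * x)) * np.sin(0.4 * (x - 0.5 * t))

emd = emd_array(stim)
single = emd[200:, 10]                # one local detector output
pooled = emd[200:, :].mean(axis=1)    # spatial pooling across the array
# relative response modulation (std / mean) drops with spatial pooling
mod_single = single.std() / single.mean()
mod_pooled = pooled.std() / pooled.mean()
```

Averaging over many spatially displaced detectors smooths out the pattern-dependent modulations, at the cost of localization, which is the trade-off the abstract describes.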
Representation of visual motion information in the fly brain
Meyer HG. Representation of visual motion information in the fly brain. Bielefeld University; 2014.
HDL and Software Sources for Bio-Inspired Visual Collision Avoidance on the Hexapod Robot HECTOR
Meyer HG, Klimeck D. HDL and Software Sources for Bio-Inspired Visual Collision Avoidance on the Hexapod Robot HECTOR. Bielefeld University; 2020.
# HDL and software sources for bio-inspired visual collision avoidance on the hexapod walking robot HECTOR
CITEC - Center of Excellence Cognitive Interaction Technology, Bielefeld University, 2020
__Developers:__
* Daniel Klimeck - [email protected]
* Hanno Gerd Meyer - [email protected]
__Description:__
The repository contains the VHDL-based cores realizing bio-inspired visual processing on a Xilinx-based Zynq-7000 SoC as well as the complementary software sources to enable the hexapod walking robot HECTOR to perform bio-inspired visual collision avoidance. The vision-based direction controller used is based upon:
[1] Bertrand et al. (2015)
A Bio-inspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes
PLoS Comput Biol. 2015; 11(11):e1004339
doi: 10.1371/journal.pcbi.1004339
[2] Meyer et al. (2016)
A Bio-Inspired Model for Visual Collision Avoidance on a Hexapod Walking Robot.
In: Biomimetic and Biohybrid Systems: 5th International Conference, Living Machines 2016, Edinburgh, UK, July 19-22, 2016. Proceedings; 2016. p. 167--178.
doi: 10.1007/978-3-319-42417-0_16
[3] Klimeck et al. (2018)
Resource-efficient Reconfigurable Computer-on-Module for Embedded Vision Applications.
In: 2018 IEEE 29th International Conference on Application-specific Systems, Architectures and Processors (ASAP); 2018. p. 1--4.
doi: 10.1109/ASAP.2018.8445091
The interfaces for the image data transmission between the VHDL-based cores are based on the AXI4-Stream protocol specification. Xilinx-based cores that are used for realizing the processing within the Zynq device are marked in the VHDL code. The sources of the Xilinx-based cores are not included within this repository.
The FPGA processing pipeline for the resource-efficient insect-inspired visual processing is structured as follows:
ReMap → SA → HPF → LPF → EMD → ME → ANV
After the camera images have been processed by the Zynq hardware, the Average Nearness Vector (ANV) is used to control the walking direction of the hexapod walking robot HECTOR. In the experimental setup, HECTOR obtains its absolute position and orientation within the arena using a system for tracking visual markers. The walking direction is computed from the direction to a goal location and the ANV. See [1-3] for further details.
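For illustration, the ANV-to-heading step can be sketched in Python as follows. The blending of goal attraction and obstacle repulsion, the gain, and the threshold here are assumptions for illustration only; the actual controller is described in [1, 2].

```python
import numpy as np

def walking_direction(nearness, azimuths, goal_azimuth, gain=1.0, threshold=0.2):
    """Sketch of an ANV-based direction controller.

    nearness: per-viewing-direction nearness estimates (motion-energy based).
    azimuths: corresponding viewing directions in radians.
    The nearness values are summed as vectors to obtain the average nearness
    vector (ANV); the robot steers away from it, blended with goal attraction."""
    anv = np.array([np.mean(nearness * np.cos(azimuths)),
                    np.mean(nearness * np.sin(azimuths))])
    anv_mag = np.linalg.norm(anv)
    if anv_mag < threshold:                     # open space: head to the goal
        return goal_azimuth
    avoid_azimuth = np.arctan2(-anv[1], -anv[0])  # point away from obstacles
    # weighted mean of goal attraction and obstacle repulsion
    w = min(gain * anv_mag, 1.0)
    vec = (1 - w) * np.array([np.cos(goal_azimuth), np.sin(goal_azimuth)]) \
        + w * np.array([np.cos(avoid_azimuth), np.sin(avoid_azimuth)])
    return np.arctan2(vec[1], vec[0])
```

With an obstacle slightly to the left of a straight-ahead goal, the sketch returns a heading to the right of the goal direction, which is the qualitative behaviour of the controller.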
The content of this repository is structured as follows:
```
- VHDL
-- ANV (Average Nearness Vector)
--- anv.vhd
-- EMD (Elementary Motion Detector)
--- emd.vhd
-- HPF (High Pass Filter)
--- AXI4-Lite.vhd
--- hpf.vhd
-- LPF (Low Pass Filter)
--- AXI4-Lite.vhd
--- lpf.vhd
-- ME (Motion Energy)
--- me.vhd
-- ReMap (Remapping and Scaling)
--- mem_init_files
---- ORDERout_bin_ROM.coe
---- ORDERx_bin_ROM.coe
---- ORDERydiff_bin_ROM.coe
--- AXI4-Lite.vhd
--- remap.vhd
-- SA (Sensitivity Adaption)
--- sa.vhd
- python
-- __init__.py
-- auto_visionmodule_twb.ini (Configuration file)
-- auto_visionmodule_twb.py (Main script)
-- behavior (Computation of heading direction)
--- __init__.py
--- CollisionAvoidance.py
-- camera (Communication with Zynq hardware)
--- __init__.py
--- vision_module
---- __init__.py
---- VisionModuleClient.py
-- control (Control of HECTOR's walking direction)
--- __init__.py
--- Control.py
-- joystick (Manual control of HECTOR's walking direction)
--- __init__.py
--- client
---- __init__.py
---- JoystickClient.py
--- server
---- JoystickServer.py
--- standalone
---- __init__.py
---- JoystickStandalone.py
-- logging (Logging of runtime data)
--- __init__.py
--- logclient_demo.py
--- client
---- __init__.py
---- LogClient.py
--- server
---- __init__.py
---- LogServer.py
-- twb (Interface to the marker tracking of the teleworkbench)
--- __init__.py
--- bridge_client
---- __init__.py
---- TWBBridgeClient.py
--- twb_bridge
---- (...)
-- visualization (Visualization of the processed camera images and walking directions)
--- __init__.py
--- client
---- __init__.py
---- VisualizationClient.py
--- server
---- __init__.py
---- VisualizationServer.py
```
Manual and semi-automatic determination of elbow angle-independent parameters for a model of the biceps brachii distal tendon based on ultrasonic imaging
Mechtenberg M, Grimmelsmann N, Meyer HG, Schneider A. Manual and semi-automatic determination of elbow angle-independent parameters for a model of the biceps brachii distal tendon based on ultrasonic imaging. PLoS ONE. 2022;17(10):e0275128.
Tendons consist of passive soft tissue with nonlinear material properties. They play a key role in force transmission from muscle to skeletal structure. The properties of tendons have been extensively examined in vitro. In this work, a nonlinear model of the distal biceps brachii tendon was parameterized based on in vivo measurements of myotendinous junction displacements at different load forces and elbow angles. The myotendinous junction displacement was extracted from ultrasound B-mode images within an experimental setup that also allowed for the retrieval of the exerted load forces and the elbow joint angles. To quantify the myotendinous junction movement based on visual features from ultrasound images, a manual and an automatic method were developed and their performance compared. Using exemplary data from three subjects, reliable fits of the tendon model were achieved. Furthermore, different aspects of the nonlinear tendon model generated in this way could be reconciled with individual experiments from the literature.
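As a toy illustration of fitting a nonlinear tendon model to such force data, the sketch below fits an exponential toe-region curve F(ε) = a·(exp(b·ε) − 1) by grid search. The functional form, parameter values, and noise-free synthetic data are illustrative assumptions, not the paper's model or data.

```python
import numpy as np

def tendon_force(strain, a, b):
    """Exponential toe-region model F(eps) = a * (exp(b * eps) - 1),
    a common form for passive tendon tissue."""
    return a * (np.exp(b * strain) - 1.0)

# noise-free synthetic (strain, force) pairs standing in for the
# measurements derived from myotendinous junction displacement
strain = np.linspace(0.0, 0.04, 25)
force = tendon_force(strain, 8.0, 70.0)

# coarse grid search over (a, b) minimizing the squared error
a_grid = np.linspace(2.0, 20.0, 91)      # step 0.2, includes a = 8.0
b_grid = np.linspace(30.0, 110.0, 81)    # step 1.0, includes b = 70.0
pred = a_grid[:, None, None] * (np.exp(b_grid[None, :, None] * strain) - 1.0)
sse = ((pred - force) ** 2).sum(axis=-1)
ia, ib = np.unravel_index(sse.argmin(), sse.shape)
fit_a, fit_b = a_grid[ia], b_grid[ib]
```

A grid search keeps the sketch dependency-free; in practice a gradient-based least-squares fit would be used, and the elbow-angle-independent parameterization is the paper's contribution.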
Motion parallax for object localization in electric fields
Hunke K, Engelmann J, Meyer HG, Schneider A. Motion parallax for object localization in electric fields. Bioinspiration and Biomimetics. 2021.
Parallax, as a visual effect, is used for depth perception of objects. But is there also a parallax effect in the context of electric field imagery? In this work, the example of weakly electric fish is used to investigate how the self-generated electric field that these fish use for orientation and communication alike may serve as a template to define electric parallax. The skin of the electric fish possesses a vast number of electroreceptors that detect the self-emitted, dipole-like electric field. Here, the weakly electric fish is abstracted as an electric dipole with a sensor line between the two emitters. Starting from an analytical description of the object distortion in a uniform electric field, the distortion in a dipole-like field is simplified and simulated. On the basis of this simulation, the parallax effect could be demonstrated in electric field images, i.e., by closer inspection of voltage profiles on the sensor line. Electric parallax can thus be defined as the relative movement of a signal feature of the voltage profile (here, its maximum, or peak) that travels along the sensor line (peak trace, PT). The PT width correlates with the object's vertical distance to the sensor line: close objects create a large PT and distant objects a small PT, comparable to the effect of visual motion parallax. © 2021 IOP Publishing Ltd
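A minimal electric-image computation under the standard induced-dipole ("small object") approximation illustrates the peak-trace idea. Geometry, charge values, and the perturbation model below are illustrative assumptions, not the paper's simulation.

```python
import numpy as np

POS, NEG = np.array([-5.0, 0.0]), np.array([5.0, 0.0])  # emitter dipole

def e_field(p):
    """Field of the two unit emitter charges at point p."""
    e = np.zeros(2)
    for q, src in ((1.0, POS), (-1.0, NEG)):
        d = p - src
        e += q * d / np.linalg.norm(d) ** 3
    return e

def image_profile(sensor_x, obj):
    """Voltage perturbation along the sensor line (y = 0) cast by a small
    object at `obj`, modelled as an induced dipole aligned with E(obj)."""
    p = e_field(obj)                           # induced dipole moment (k = 1)
    out = np.empty_like(sensor_x)
    for i, sx in enumerate(sensor_x):
        d = np.array([sx, 0.0]) - obj
        out[i] = p @ d / np.linalg.norm(d) ** 3
    return out

sx = np.linspace(-4.0, 4.0, 801)
near = np.abs(image_profile(sx, np.array([0.0, 1.0])))
far = np.abs(image_profile(sx, np.array([0.0, 2.0])))

# tracking the profile's peak while the object moves past yields the
# peak trace: the peak travels along the sensor line with the object
trace = [sx[np.abs(image_profile(sx, np.array([ox, 1.0]))).argmax()]
         for ox in np.linspace(0.5, 2.5, 9)]
```

Closer objects cast stronger and spatially sharper images, and the peak of the profile moves along the sensor line as the object passes, which is the relative movement the peak-trace definition builds on.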
Panoramic high dynamic range images in diverse environments
Meyer HG, Schwegmann A, Lindemann JP, Egelhaaf M. Panoramic high dynamic range images in diverse environments. Bielefeld University; 2014.
This database contains 421 panoramic high dynamic range images recorded in diverse environments. The images cover the full 360° in azimuth and extend from 58° below to 47° above the horizon in elevation.
We used a spectral filter to limit the camera’s spectral sensitivity to wavelengths in the range of 480-560 nm (green). This filtering mimics the spectral sensitivity of photoreceptors R1-R6 of the fly that provide the input of the motion vision system. As a consequence, the mapping of colors to gray values in these images is similar to the green color channel in RGB images.
The raw images have a resolution of approximately 1 megapixel (928×928 pixels) at 12-bit depth. The images have a high dynamic range covering the entire brightness range encountered in natural environments (excluding the solar disc). After linearization, the resulting image values had a dynamic range of 1:23,900 covering 3,955 intensity steps. Note, however, that the pixel brightness values cannot be recalculated to an SI unit such as candela, though the values are proportional to luminance in the green spectral range. For more technical details about the recording of the image sequences, see Meyer et al. (2014).
In addition to the raw camera images, unwrapped and linearized panorama images are provided with a resolution of 927 x 250.
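As a quick sanity check when working with the linearized images, the reported dynamic range and number of distinct intensity steps can be recomputed from any image array. The `geomspace` array below is a stand-in for a real unwrapped panorama loaded from the archive.

```python
import numpy as np

# stand-in for a linearized panorama: geometrically spaced brightness
# values spanning the reported 1:23,900 range in 3,955 steps
img = np.geomspace(1.0, 23900.0, 3955)

nonzero = img[img > 0]
dynamic_range = nonzero.max() / nonzero.min()   # ratio of brightest to darkest
steps = np.unique(nonzero).size                 # number of distinct intensities
```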
Please refer to the readme.pdf contained in the data archive for detailed usage information.
Note: The .rar files are archives that can be opened with programs such as WinRAR or WinZip.
Ref.:
Meyer, H. G., Schwegmann, A., Warzecha, A. K., & Egelhaaf, M. (2014). The textural principal components of natural scenes driving the pattern-dependent response modulations in motion-sensitive interneurons. Frontiers in Neuroscience (in review).
Evaluation of sEMG Signal Features and Segmentation Parameters for Limb Movement Prediction Using a Feedforward Neural Network
Limb movement prediction based on surface electromyography (sEMG) for the control of wearable robots, such as active orthoses and exoskeletons, is a promising approach since it provides an intuitive control interface for the user. Further, sEMG signals contain early information about the onset and course of limb movements for feedback control. Recent studies have proposed machine learning-based modeling approaches for limb movement prediction using sEMG signals, which do not necessarily require domain knowledge of the underlying physiological system and its parameters. However, there is limited information on which features of the measured sEMG signals provide the best prediction accuracy of machine learning models trained with these data. In this work, the accuracy of elbow joint movement prediction based on sEMG data using a simple feedforward neural network after training with different single- and multi-feature sets and data segmentation parameters was compared. It was shown that certain combinations of time-domain and frequency-domain features, as well as segmentation parameters of sEMG data, improve the prediction accuracy of the neural network as compared to the use of a standard feature set from the literature
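The kind of time- and frequency-domain feature extraction with sliding-window segmentation involved can be sketched as follows. The feature choice, window length, and increment are illustrative assumptions, not the study's exact feature sets or parameters.

```python
import numpy as np

def semg_features(window, fs=1000.0):
    """A few common sEMG features for one segment (window: 1-D array, fs in Hz)."""
    mav = np.mean(np.abs(window))                        # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))                  # root mean square
    wl = np.sum(np.abs(np.diff(window)))                 # waveform length
    zc = np.count_nonzero(np.diff(np.signbit(window).astype(np.int8)))  # zero crossings
    spec = np.abs(np.fft.rfft(window)) ** 2              # power spectrum
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    mnf = np.sum(freqs * spec) / np.sum(spec)            # mean frequency
    return np.array([mav, rms, wl, zc, mnf])

def segment(signal, fs=1000.0, win_ms=200, step_ms=50):
    """Sliding-window segmentation: one feature vector per overlapping window."""
    w, s = int(fs * win_ms / 1000), int(fs * step_ms / 1000)
    starts = range(0, signal.size - w + 1, s)
    return np.stack([semg_features(signal[i:i + w], fs) for i in starts])
```

The resulting feature matrix (windows × features) is the typical input for training a feedforward network; the study's comparison varies exactly these feature sets and segmentation parameters.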